Twitter aims to fight bias by examining its own machine learning algorithms

#artificialintelligence

Twitter's Responsible ML will be acting on four pillars that it believes represent a responsible view of machine learning technology: Taking responsibility …


Howard discusses Sex, Race, and Robotics and how to fight bias in AI

#artificialintelligence

[Image: Ayanna Howard with a Dynamic Anthropomorphic Robot with Intelligence-Open Platform (DARwIn-OP). Source: Rob Felt, Georgia Institute of Technology.] Headlines regularly proclaim that robots are coming for people's jobs or are "creepy," but both robotics developers and the general public are increasingly aware of the many ways in which the technology can boost productivity and safety. However, the need to understand how robots and artificial intelligence can inherit negative human biases is still urgent, according to roboticist Ayanna Howard. "Bias in AI is the responsibility of the designer," said Howard, who recently published the book Sex, Race, and Robots: How to Be Human in the A…


Startup launches world's first genderless AI to fight bias in smart assistants

Daily Mail - Science & tech

Talk to Apple's Siri or Amazon's Alexa and you'll notice a common trait: they both have female voices. While this can help make robotic assistants more relatable and natural to converse with, it has assigned a gender to a technology that is otherwise genderless. Now, researchers are hoping to offer a new alternative by launching what they're calling the world's first "genderless voice." To create "Q," researchers recorded voices from participants who identify as non-binary, or neither exclusively female nor male. Researchers then tested the voice on 4,600 people across Europe.


IBM hopes to fight bias in facial recognition with new diverse dataset

#artificialintelligence

Bias is a big problem in facial recognition, with studies showing that commercial systems are more accurate if you're white and male. Part of the reason for this is a lack of diversity in the training data, with people of color appearing less frequently than their peers. IBM is one of the companies trying to combat this problem, and today announced two new public datasets that anyone can use to train facial recognition systems -- one of which has been curated specifically to help remove bias. The first dataset contains 1 million images and will help train systems that can spot specific attributes, like hair color, eye color, and facial hair. Each face is annotated with these characteristics, making it easier for programmers to hone their systems to better distinguish between, say, a goatee and a soul patch.


Justice Can't Be Colorblind: How to Fight Bias with Predictive Policing

@machinelearnbot

Originally published by Scientific American. Law enforcement's use of predictive analytics recently came under fire again. Dartmouth researchers made waves by reporting that simple predictive models -- as well as non-expert humans -- predict crime just as well as the leading proprietary analytics software. That the leading software achieves (only) human-level performance might not actually be a deadly blow, but a flurry of press from dozens of news outlets quickly followed. In any case, even as this disclosure raises questions about one software tool's credibility, a more enduring, inherent quandary continues to plague predictive policing.